Almost-Everywhere Secure Computation with Edge Corruptions
We consider secure multi-party computation (MPC) in a setting where
the adversary can separately corrupt not only the parties (nodes) but
also the communication channels (edges), and can furthermore choose
selectively and adaptively which edges or nodes to corrupt. Note that
if an adversary corrupts an edge, even if the two nodes that share
that edge are honest, the adversary can control the link and thus
deliver wrong messages to both players. We consider this question in
the information-theoretic setting, and require security against a
computationally unbounded adversary.
In a fully connected network the above question is simple (and we
also provide an answer
that is optimal up to a constant factor). What makes the problem
more challenging is to consider the case of sparse networks.
Partially connected networks are far more realistic than fully
connected networks, which led Garay and Ostrovsky [Eurocrypt '08] to
formulate the notion of (unconditional) \emph{almost everywhere (a.e.)
secure computation} in the node-corruption model, i.e., a model in
which not all pairs of nodes are connected by secure channels and the
adversary can corrupt some of the nodes (but not the edges). In such a setting,
MPC amongst all honest nodes cannot be guaranteed due
to the possibly poor connectivity of some honest nodes with other
honest nodes, and hence some of
them must be ``given up'' and left out of the
computation. The number of such nodes is a function of the underlying
communication graph and the adversarial set of nodes.
In this work we introduce the notion of \emph{almost-everywhere secure
computation with edge corruptions}, which is exactly the same problem as
described above, except that we additionally allow the adversary to
completely control some of the communication channels between two
correct nodes---i.e., to ``corrupt'' edges in the network. While it is
easy to see that an a.e. secure computation protocol for the original
node-corruption model is also an a.e. secure computation protocol tolerating
edge corruptions (albeit for a reduced fraction of edge corruptions
with respect to the bound for node corruptions), no polynomial-time
protocol is known in the case where a {\bf constant fraction} of the edges can be corrupted (i.e., the maximum that can be tolerated)
and the degree of the network is sub-linear.
We make progress on this front, by constructing graphs of degree $n^{\epsilon}$
(for an arbitrary constant $\epsilon > 0$) on which we
can run a.e. secure computation protocols tolerating a constant fraction of
adversarial edges. The number of given-up nodes in our construction
is $\delta n$ (for some constant $\delta$ that depends on the fraction
of corrupted edges), which is also asymptotically optimal.
Block-Wise Non-Malleable Codes
Non-malleable codes, introduced by Dziembowski, Pietrzak, and Wichs (ICS '10), provide the guarantee that if a codeword c of a message m is modified by a tampering function f to c', then c' either decodes to m or to "something unrelated" to m. In recent literature, a lot of focus has been on explicitly constructing such codes against large and natural classes of tampering functions, such as the split-state model, in which the tampering function operates on different parts of the codeword independently.
In this work, we consider a stronger adversarial model called the block-wise tampering model, in which we allow tampering to depend on more than one block: if a codeword consists of two blocks c = (c1, c2), then the first tampering function f1 could produce a tampered part c'_1 = f1(c1) and the second tampering function f2 could produce c'_2 = f2(c1, c2) depending on both c1 and c2. The notion similarly extends to multiple blocks, where tampering of block ci could happen with the knowledge of all cj for j <= i. We argue this is a natural notion where, for example, the blocks are sent one by one and the adversary must send the tampered block before it gets the next block.
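The block-wise tampering experiment can be sketched in a few lines. This is a toy two-block code with illustrative names, not the paper's construction; it only shows the information flow (the i-th tampering function sees blocks 1..i) and why an adversary who has seen all blocks is so powerful.

```python
# Toy sketch of the block-wise tampering experiment: the codeword is
# split into blocks, and the i-th tampering function sees all blocks
# sent so far (c_1, ..., c_i).

def encode(m: str):
    # toy 2-block code: both blocks carry the message
    return (m, m)

def decode(c):
    c1, c2 = c
    return c1 if c1 == c2 else None  # None plays the role of "invalid"

def blockwise_tamper(c, f1, f2):
    c1, c2 = c
    t1 = f1(c1)          # f1 sees only the first block
    t2 = f2(c1, c2)      # f2 sees everything sent so far
    return (t1, t2)

# An adversary exploiting the last block: once c2 arrives, it can in
# effect decode the whole codeword and re-encode a *related* message --
# the kind of attack that rules out plain non-malleability here.
f1 = lambda c1: c1 + "!"
f2 = lambda c1, c2: c2 + "!"

c = encode("pay $10")
print(decode(blockwise_tamper(c, f1, f2)))   # a related, valid message
```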
A little thought reveals that it is impossible to construct such codes that are non-malleable (in the standard sense) against such a powerful adversary: indeed, upon receiving the last block, an adversary could decode the entire codeword and then tamper depending on the message. In light of this impossibility, we consider a natural relaxation called non-malleable codes with replacement, which requires the adversary to produce not only a related but also a valid codeword in order to succeed. Unfortunately, we show that even this relaxed definition is not achievable in the information-theoretic setting (i.e., when the tampering functions can be unbounded), which implies that we must turn our attention towards computationally bounded adversaries.
As our main result, we show how to construct a block-wise non-malleable code (BNMC) from sub-exponentially hard one-way permutations. We provide an interesting connection between BNMCs and non-malleable commitments. We show that any BNMC can be converted into a non-malleable (w.r.t. opening) commitment scheme. Our techniques, quite surprisingly, give rise to a non-malleable commitment scheme (secure against so-called synchronizing adversaries) in which only the committer sends messages. We believe this result to be of independent interest. In the other direction, we show that any non-interactive non-malleable (w.r.t. opening) commitment can be used to construct a BNMC with only 2 blocks. Unfortunately, such commitment schemes exist only under highly non-standard assumptions (adaptive one-way functions) and hence cannot substitute for our main construction.
Constrained Pseudorandom Functions: Verifiable and Delegatable
Constrained pseudorandom functions (introduced independently by Boneh and Waters (CCS 2013), Boyle, Goldwasser, and Ivan (PKC 2014), and Kiayias, Papadopoulos, Triandopoulos, and Zacharias (CCS 2013)) are pseudorandom functions (PRFs) that allow the owner of the secret key to compute a constrained key $K_p$ for some predicate $p$, such that anyone who possesses $K_p$ can compute the output of the PRF on any input $x$ such that $p(x) = 1$. The security requirement of constrained PRFs states that the PRF output must still look indistinguishable from random for any $x$ such that $p(x) = 0$.
Boneh and Waters show how to construct constrained PRFs for the class of bit-fixing as well as circuit predicates. They explicitly left open the question of constructing constrained PRFs that are delegatable, i.e., constrained PRFs where the owner of $K_p$ can compute a constrained key $K_{p'}$ for a more restrictive predicate $p'$. Boyle, Goldwasser, and Ivan left open the question of constructing constrained PRFs that are also verifiable. Verifiable random functions (VRFs), introduced by Micali, Rabin, and Vadhan (FOCS 1999), are PRFs that allow the owner of the secret key to prove, for any input $x$, that $y$ is indeed the output of the PRF on $x$; the security requirement of VRFs states that the PRF output must still look indistinguishable from random for any $x$ for which a proof is not given.
In this work, we solve both of the above open questions by constructing constrained pseudorandom functions that are simultaneously verifiable and delegatable.
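The constrain/delegate interface can be illustrated with the folklore GGM-style prefix-constrained PRF (a standard textbook construction, not the verifiable scheme of this work, and with SHA-256 standing in for a length-doubling PRG): the constrained key for a prefix predicate is simply the GGM tree node at that prefix, and delegation falls out naturally by extending the prefix.

```python
import hashlib

def _step(seed: bytes, bit: str) -> bytes:
    # one PRG half: H(seed || bit); SHA-256 stands in for a real PRG
    return hashlib.sha256(seed + bit.encode()).digest()

def eval_prf(seed: bytes, x: str) -> bytes:
    # GGM tree evaluation: walk down the tree along the bits of x
    for b in x:
        seed = _step(seed, b)
    return seed

def constrain(seed: bytes, prefix: str):
    # constrained key for the predicate "x starts with prefix":
    # simply the GGM node sitting at that prefix
    return (prefix, eval_prf(seed, prefix))

def constrained_eval(ck, x: str) -> bytes:
    prefix, node = ck
    assert x.startswith(prefix), "predicate not satisfied"
    return eval_prf(node, x[len(prefix):])

def delegate(ck, extension: str):
    # delegation: a key for prefix p yields a key for the more
    # restrictive prefix p || extension
    prefix, node = ck
    return (prefix + extension, eval_prf(node, extension))

k = b"master-secret"
ck = constrain(k, "01")
ck2 = delegate(ck, "1")            # key for the longer prefix "011"
x = "0110"
assert constrained_eval(ck, x) == eval_prf(k, x) == constrained_eval(ck2, x)
```

Adding verifiability (proofs that an output is correct) is precisely what makes the problem hard; the sketch above offers no such proofs.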
Information-theoretic Local Non-malleable Codes and their Applications
Error-correcting codes, though powerful, are only applicable in scenarios where the adversarial channel does not introduce ``too many'' errors into the codewords. Yet,
the question of having guarantees even in the face of many errors is well-motivated. Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), address precisely this question. Such codes guarantee that even if an adversary completely over-writes the codeword, he cannot transform it into a
codeword for a related message. Not only is this a creative solution to the problem mentioned above, it is also a very meaningful one. Indeed, non-malleable codes have inspired a rich body of theoretical constructions as well as applications to tamper-resilient cryptography, CCA2 encryption schemes and so on.
Another remarkable variant of error-correcting codes was introduced by Katz and Trevisan (STOC 2000) when they explored the question of decoding ``locally''. Locally decodable codes are coding schemes which have an additional ``local decode'' procedure: in order to decode a bit of the message, this procedure accesses only a few bits of the codeword. These codes too have received tremendous attention from researchers and have applications to various primitives in cryptography such as private information retrieval. More recently, Chandran, Kanukurthi and Ostrovsky (TCC 2014) explored the converse problem of making the ``re-encoding'' process local. Locally updatable codes have an additional ``local update'' procedure: in order to update a bit of the message, this procedure accesses/rewrites only a few bits of the codeword.
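The two locality notions can be made concrete with a deliberately naive example (a 3-fold repetition code; real locally decodable/updatable codes achieve far better rate, and the function names here are purely illustrative): decoding or updating one message bit touches only the few codeword positions holding that bit's copies.

```python
# Toy locality sketch: a 3-fold repetition code. "Local decode" reads
# only 3 codeword positions; "local update" rewrites only 3 positions.
R = 3  # repetition factor

def encode(msg):                      # msg: list of 0/1 bits
    return [b for b in msg for _ in range(R)]

def local_decode(code, i):
    # read only the R copies of bit i; majority vote tolerates
    # fewer than R/2 errors among them
    copies = code[R * i : R * i + R]
    return 1 if sum(copies) > R // 2 else 0

def local_update(code, i, b):
    # rewrite only the R copies of bit i
    for j in range(R * i, R * i + R):
        code[j] = b

c = encode([1, 0, 1])
c[1] = 0                      # corrupt one codeword position
assert local_decode(c, 0) == 1   # still decodes correctly
local_update(c, 2, 0)            # touches only positions 6..8
assert local_decode(c, 2) == 0
```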
At TCC 2015, Dachman-Soled, Liu, Shi and Zhou initiated the study of locally decodable and updatable non-malleable codes, thereby combining all the important properties mentioned above into one tool. Achieving locality and non-malleability is non-trivial. Yet, Dachman-Soled \etal \ provide a meaningful definition of local non-malleability and provide a construction that satisfies it. Unfortunately, their construction is secure only in the computational setting.
In this work, we construct information-theoretic non-malleable codes which are locally updatable and decodable. Our codes are non-malleable against $\mathcal{F}_{\mathsf{half}}$, the class of tampering functions where each function is arbitrary but acts (independently) on two separate parts of the codeword. This is one of the strongest adversarial models for which explicit constructions of standard non-malleable codes (without locality) are known. Our codes have rate $O(1)$ and locality $O(\lambda)$, where $\lambda$ is the security parameter. We also show a rate-$1$ code with locality $O(\lambda)$ that is non-malleable against bit-wise tampering functions. Finally, similar to Dachman-Soled \etal, our work finds applications to information-theoretic secure RAM computation.
Reducing Depth in Constrained PRFs: From Bit-Fixing to NC1
The candidate construction of multilinear maps by Garg, Gentry, and Halevi (Eurocrypt 2013) has led to an explosion of new cryptographic constructions ranging from attribute-based encryption (ABE) for arbitrary polynomial-size circuits, to program obfuscation, and to constrained pseudorandom functions (PRFs). Many of these constructions require k-linear maps for large k. In this work, we focus on the reduction of k in certain constructions of access control primitives that are based on k-linear maps; in particular, we consider the case of constrained PRFs and ABE. We construct the following objects:
- A constrained PRF for arbitrary circuit predicates based on (n+l_{OR}-1)-linear maps (where n is the input length and l_{OR} denotes the OR-depth of the circuit).
- For circuits with a specific structure, we also show how to construct such PRFs based on (n+l_{AND}-1)-linear maps (where l_{AND} denotes the AND-depth of the circuit).
We then give a black-box construction of a constrained PRF for NC1 predicates, from any bit-fixing constrained PRF that fixes only one of the input bits to 1; we only require that the bit-fixing PRF have certain key-homomorphic properties. This construction is of independent interest as it sheds light on the hardness of constructing constrained PRFs even for ``simple'' predicates such as bit-fixing predicates.
Instantiating this construction with the bit-fixing constrained PRF from Boneh and Waters (Asiacrypt 2013) gives us a constrained PRF for NC1 predicates that is based only on n-linear maps, with no dependence on the predicate. In contrast, the previous constructions of constrained PRFs (Boneh and Waters, Asiacrypt 2013) required (n+l+1)-linear maps for circuit predicates (where l is the total depth of the circuit) and n-linear maps even for bit-fixing predicates.
We also show how to extend our techniques to obtain a similar improvement in the case of ABE, and construct ABE for arbitrary circuits based on (l_{OR}+1)-linear (respectively, (l_{AND}+1)-linear) maps.
Hierarchical Functional Encryption
Functional encryption provides fine-grained access control for encrypted data, allowing each user to learn only specific functions of the encrypted data. We study the notion of hierarchical functional encryption, which augments functional encryption with delegation capabilities, offering significantly more expressive access control.
We present a generic transformation that converts any general-purpose public-key functional encryption scheme into a hierarchical one without relying on any additional assumptions. This significantly refines our understanding of the power of functional encryption, showing that the existence of functional encryption is equivalent to that of its hierarchical generalization.
Instantiating our transformation with the existing functional encryption schemes yields a variety of hierarchical schemes offering various trade-offs between their delegation capabilities (i.e., the depth and width of their hierarchical structures) and underlying assumptions. When starting with a scheme secure against an unbounded number of collusions, we can support arbitrary hierarchical structures. In addition, even when starting with schemes that are secure against a bounded number of collusions (which are known to exist under rather minimal assumptions such as the existence of public-key encryption and shallow pseudorandom generators), we can support hierarchical structures of bounded depth and width.
Efficient ML Models for Practical Secure Inference
ML-as-a-service continues to grow, and so does the need for very strong
privacy guarantees. Secure inference has emerged as a potential solution,
wherein cryptographic primitives allow inference without revealing users'
inputs to a model provider or model's weights to a user. For instance, the
model provider could be a diagnostics company that has trained a
state-of-the-art DenseNet-121 model for interpreting a chest X-ray and the user
could be a patient at a hospital. While secure inference is in principle
feasible for this setting, there are no existing techniques that make it
practical at scale. The CrypTFlow2 framework provides a potential solution with
its ability to automatically and correctly translate clear-text inference to
secure inference for arbitrary models. However, the resultant secure inference
from CrypTFlow2 is impractically expensive: Almost 3TB of communication is
required to interpret a single X-ray on DenseNet-121. In this paper, we address
this outstanding challenge of inefficiency of secure inference with three
contributions. First, we show that the primary bottlenecks in secure inference
are large linear layers which can be optimized with the choice of network
backbone and the use of operators developed for efficient clear-text inference.
This finding and emphasis deviates from many recent works which focus on
optimizing non-linear activation layers when performing secure inference of
smaller networks. Second, based on an analysis of a bottlenecked convolution
layer, we design an X-operator which is a more efficient drop-in replacement.
Third, we show that the fast Winograd convolution algorithm further improves
efficiency of secure inference. In combination, these three optimizations prove
to be highly effective for the problem of X-ray interpretation trained on the
CheXpert dataset.
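The Winograd trick mentioned above is the standard F(2,3) algorithm: it computes two outputs of a 3-tap 1-D convolution with 4 multiplications instead of the naive 6, the kind of multiplication saving that matters when each multiplication is expensive under secure computation. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def winograd_f23(d, g):
    # Winograd F(2,3): 2 outputs of a 3-tap filter from 4 inputs,
    # using 4 multiplications (m1..m4) instead of the naive 6.
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([0.5, -1.0, 2.0])       # filter taps
naive = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                  d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), naive)
```

In 2-D convolutions the same idea is applied tile-by-tile, and the filter-side transforms can be precomputed once per layer.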
Functional Encryption: Decentralised and Delegatable
Recent advances in encryption schemes have allowed us to go far beyond point-to-point encryption, the scenario typically envisioned in public-key encryption. In particular, Functional Encryption (FE) allows an authority to provide users with keys corresponding to various functions, such that a user with a secret key corresponding to a function $f$ can compute $f(x)$ (and only that) from a ciphertext that encrypts $x$.
While FE is a very powerful primitive, a key downside is the requirement of a central point of trust. FE requires the assumption of a central trusted authority which performs the system setup as well as manages the credentials of every party in the system on an ongoing basis. This is in contrast to public-key infrastructure, which may have multiple certificate authorities and allows a party to have different (and varying) levels of trust in them.
In this work, we address this issue of trust in two ways:
\begin{itemize}
\item First, we ask: how realistic is it to have a central authority that manages all credentials and is trusted by everyone? For example, one may need to either obtain the permission of an income tax official, or the permission of the police department and a court judge, in order to be able to obtain specific financial information of a user from encrypted financial data. Towards that end, we introduce a new primitive that we call {\em Multi-Authority Functional Encryption} (MAFE) as a generalization of both Functional Encryption and Multi-Authority Attribute-Based Encryption (MABE). We show how to obtain MAFE for arbitrary polynomial-time computations based on
subexponentially secure indistinguishability obfuscation and injective one-way functions.
\item Second, we consider the notion of \emph{delegatable} functional encryption, where any user in the system may independently act as a key-generation authority. In delegatable FE, any user may derive a decryption key for a policy which is ``more restrictive'' than its own. Thus, in delegatable functional encryption, keys
can be generated in a hierarchical way, instead of directly by a central authority. In contrast to MAFE, however, in a delegatable FE scheme, the trust still ``flows'' outward from the central authority.
\end{itemize}
Finally, we remark that our techniques are of independent interest: we construct FE in arguably a more natural way, where a decryption key for a function $f$ is simply a signature on $f$. Such a direct approach allows us to obtain a construction with interesting properties enabling multiple authorities as well as delegation.
Public-Key Encryption with Efficient Amortized Updates
Searching and modifying public-key encrypted data (without having the decryption key) has received a lot of attention in recent literature. In this paper we re-visit this important problem and achieve much better amortized communication-complexity bounds. Our solution resolves the main open question posed by Boneh et al.~\cite{BKOS07}.
First, we consider the following much simpler to state problem (which turns out to be central for the above): A server holds a copy of Alice's database that has been encrypted under Alice's public key. Alice would like to allow other users in the system to replace a
bit of their choice in the server's database by communicating directly with the server, despite other users not having Alice's private key. However, Alice requires that the server should not know
which bit was modified. Additionally, she requires that the
modification protocol should have ``small'' communication complexity
(sub-linear in the database size). This task is referred to as
private database modification, and is a central tool in building a more general protocol for modifying and searching over public-key encrypted data with small communication complexity. The problem was first considered by Boneh et al.~\cite{BKOS07}. The protocol of \cite{BKOS07} to modify a single bit of an $n$-bit database has communication complexity sub-linear in $n$. Naturally, one can ask if we can improve upon this. Unfortunately, \cite{OS08} give evidence to the contrary, showing that using current algebraic techniques, this is not possible to do. In this paper, we ask the following question: what is the communication complexity when modifying $\ell$ bits of an $n$-bit database? Of course, one can achieve the naive communication complexity by simply repeating the protocol of \cite{BKOS07} $\ell$ times. Our main result is a private database modification protocol to modify $\ell$ bits of an $n$-bit database whose amortized communication complexity per modified bit is asymptotically smaller than that of the naive solution. (We remark that in contrast with recent work of Lipmaa \cite{L08} on the same topic, our database size {\em does not grow} with every update, and stays exactly the same size.)
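The core mechanism, a server applying an encrypted update to an encrypted database without learning either, can be illustrated with Goldwasser-Micali encryption, whose ciphertexts are XOR-homomorphic. This is only a toy sketch with tiny insecure parameters, and it is the naive variant with one ciphertext per database bit (linear communication); the paper's protocols achieve the sub-linear amortized bounds.

```python
import math, random

# Toy Goldwasser-Micali encryption: Enc(b) = y^b * r^2 mod N, where y is
# a quadratic non-residue mod both primes. Multiplying two ciphertexts
# XORs the plaintext bits, so the server can apply an encrypted update
# vector to an encrypted database without learning anything.

p, q = 499, 547                      # toy primes; real keys are much larger
N = p * q
y = next(v for v in range(2, N)      # non-residue mod p and mod q
         if pow(v, (p - 1) // 2, p) == p - 1
         and pow(v, (q - 1) // 2, q) == q - 1)

def enc(b):
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(y, b, N) * pow(r, 2, N)) % N

def dec(c):
    # c encrypts 0 iff c is a quadratic residue mod p (Euler's criterion)
    return 0 if pow(c, (p - 1) // 2, p) == 1 else 1

db = [enc(b) for b in [0, 1, 1, 0]]              # server's encrypted bits
flip = [0, 0, 1, 0]                              # user wants to flip bit 2
db = [(c * enc(e)) % N for c, e in zip(db, flip)]   # homomorphic XOR
assert [dec(c) for c in db] == [0, 1, 0, 0]
```

The server sees only fresh-looking ciphertexts for every position, so which bit was modified stays hidden.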
As sample corollaries to our main result, we obtain the following:
\begin{itemize}
\item First, we apply our private database modification protocol to
answer the main open question of \cite{BKOS07}. More specifically, we
construct a public key encryption scheme supporting PIR queries that
allows every message to have a non-constant number of keywords
associated with it.
\item Second, we show that one can apply our
techniques to obtain more efficient communication complexity when
parties wish to increment or decrement multiple cryptographic
counters (formalized by Katz et al.~\cite{KMO01}).
\end{itemize}
We believe that ``public-key encrypted'' amortized database modification is an important cryptographic primitive in its own right and will be useful in other applications.
Privacy Amplification with Asymptotically Optimal Entropy Loss
We study the problem of ``privacy amplification'': key agreement
between two parties who both know a weak secret w, such as a
password. (Such a setting is ubiquitous on the internet, where
passwords are the most commonly used security device.) We assume
that the key agreement protocol is taking place in the presence of
an active computationally unbounded adversary Eve. The adversary may
have partial knowledge about w, so we assume only that w has
some entropy from Eve's point of view. Thus, the goal of the
protocol is to convert this non-uniform secret w into a uniformly
distributed string R that is fully secret from Eve. R may then
be used as a key for running symmetric cryptographic protocols (such
as encryption, authentication, etc.).
Because we make no computational assumptions, the entropy in R can
come only from w. Thus such a protocol must minimize the entropy
loss during its execution, so that R is as long as possible. The
best previous results have entropy loss quadratic in the security
parameter, thus requiring the password to be very long even for
modest security levels. In this work, we present the first protocol
for information-theoretic key agreement that has entropy loss
\emph{linear} in the security parameter. The
result is optimal up to constant factors. We achieve our improvement
through a somewhat surprising application of error-correcting codes
for the edit distance.
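The passive-adversary core of privacy amplification is a seeded strong extractor, classically built from 2-universal hashing via the leftover hash lemma. The sketch below (function names and parameters are illustrative) shows only that core: the paper's contribution is the interactive machinery that authenticates the seed against an *active* Eve with small entropy loss, which is not sketched here.

```python
import secrets

# Leftover-hash-lemma extractor: R = h_{a,b}(w) for a random seed (a, b)
# from the 2-universal family h_{a,b}(w) = ((a*w + b) mod P) mod 2^m.
# If w has enough entropy given Eve's view, R is close to uniform even
# though the seed travels in the clear.

P = (1 << 89) - 1        # a Mersenne prime; the weak secret must be < P
OUT_BITS = 32            # length of the extracted key R

def extract(w: int, seed) -> int:
    a, b = seed
    return ((a * w + b) % P) % (1 << OUT_BITS)

# Alice and Bob share the weak secret w (e.g., a password).
w = int.from_bytes(b"hunter2pwd", "big")          # 80 bits, fits below P
seed = (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))
R_alice = extract(w, seed)
R_bob = extract(w, seed)
assert R_alice == R_bob and R_alice < (1 << OUT_BITS)
```

Note the entropy accounting: the output can be at most (entropy of w) minus a security-dependent loss long; minimizing that loss is exactly what the abstract's result is about.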
The protocol can be extended to provide also ``information
reconciliation,'' that is, to work even when the two parties have slightly different versions of w (for example, when biometrics are involved).